
    Hierarchical Framework for Automatic Pancreas Segmentation in MRI Using Continuous Max-flow and Min-Cuts Approach

    Accurate, automatic and robust segmentation of the pancreas in medical image scans remains a challenging but important prerequisite for computer-aided diagnosis (CADx). This paper presents a tool for automatic pancreas segmentation in magnetic resonance imaging (MRI) scans. The proposed framework employs a hierarchical pooling of information in three steps: (1) identify the major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation using a continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on the area, curvature and relative position of distinct contours. The proposed method is evaluated on a dataset of 20 MRI volumes, achieving a mean Dice Similarity coefficient of 75.5 ± 7.0% and a mean Jaccard Index coefficient of 61.2 ± 9.2%.
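
    The reported evaluation metrics can be reproduced from binary masks. The following is a minimal sketch, assuming NumPy arrays pred and gt of identical shape hold the predicted and ground-truth segmentations; it illustrates the Dice and Jaccard calculations only and is not the authors' evaluation code.

        import numpy as np

        def dice_coefficient(pred, gt):
            """Dice similarity coefficient between two binary masks."""
            pred, gt = pred.astype(bool), gt.astype(bool)
            intersection = np.logical_and(pred, gt).sum()
            denom = pred.sum() + gt.sum()
            return 2.0 * intersection / denom if denom > 0 else 1.0

        def jaccard_index(pred, gt):
            """Jaccard index (intersection over union) between two binary masks."""
            pred, gt = pred.astype(bool), gt.astype(bool)
            intersection = np.logical_and(pred, gt).sum()
            union = np.logical_or(pred, gt).sum()
            return intersection / union if union > 0 else 1.0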

    AI Driven IoT Web-Based Application for Automatic Segmentation and Reconstruction of Abdominal Organs from Medical Images

    Medical imaging technology has rapidly advanced in the last few decades, providing detailed images of the human body. The accurate analysis of these images and the segmentation of anatomical structures can produce significant morphological information, provide additional guidance toward subject stratification after diagnosis or before a clinical trial, and help predict a medical condition. Usually, medical scans are manually segmented by expert operators, such as radiologists and radiographers, which is complex, time-consuming and prone to inter-observer variability. A system that generates automatic, accurate quantitative organ segmentation on a large scale could deliver a clinical impact, supporting current investigations in subjects with medical conditions and aiding early diagnosis and treatment planning. This paper proposes a web-based application that automatically segments multiple abdominal organs and muscle, produces respective 3D reconstructions and extracts valuable biomarkers using a deep learning backend engine. Furthermore, it is possible to upload image data and access the medical image segmentation tool without installation using any device connected to the Internet. The final aim is to deliver a web-based image-processing service that clinical experts, researchers and users can seamlessly access through IoT devices without requiring knowledge of the underpinning technology.
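
    A service of this kind is typically exposed through a thin HTTP layer in front of the segmentation engine. The sketch below, using Flask, shows one possible upload-and-segment pattern; the segment_volume function is a hypothetical placeholder for the deep learning backend, not the application's actual API.

        from flask import Flask, request, jsonify

        app = Flask(__name__)

        def segment_volume(image_bytes):
            # Hypothetical placeholder for the deep learning backend engine:
            # decode the uploaded scan, run organ segmentation and return
            # 3D reconstructions and extracted biomarkers.
            return {"organs": [], "biomarkers": {}}

        @app.route("/segment", methods=["POST"])
        def segment():
            # Accept a medical image uploaded from any Internet-connected device.
            uploaded = request.files["scan"]
            return jsonify(segment_volume(uploaded.read()))

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8000)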

    Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation

    Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. This method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as “pancreas” or “non-pancreas”. There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cuts approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice Similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSCs of 79.6 ± 5.7% and 81.6 ± 5.1%, respectively. The approach is statistically stable, reflected by lower standard deviations in comparison to state-of-the-art approaches.
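
    Stage (3) amounts to keeping only connected regions that look plausible for the pancreas. A minimal sketch of such post-processing with scikit-image is shown below; the area threshold, the centroid-distance rule and the parameter values are illustrative assumptions, not the exact criteria used in the paper.

        import numpy as np
        from skimage import measure

        def filter_segmentation(mask, min_area=500, reference_centroid=None, max_dist=40.0):
            """Keep connected components that satisfy simple area and position rules."""
            labelled = measure.label(mask.astype(bool))
            kept = np.zeros_like(labelled, dtype=bool)
            for region in measure.regionprops(labelled):
                if region.area < min_area:
                    continue  # discard small spurious contours
                if reference_centroid is not None:
                    dist = np.linalg.norm(np.array(region.centroid) - np.array(reference_centroid))
                    if dist > max_dist:
                        continue  # discard contours far from the expected pancreas location
                kept[labelled == region.label] = True
            return kept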

    A Framework for Automatic Morphological Feature Extraction and Analysis of Abdominal Organs in MRI Volumes

    The accurate 3D reconstruction of organs from radiological scans is an essential tool in computer-aided diagnosis (CADx) and plays a critical role in clinical, biomedical and forensic science research. The structure and shape of the organ, combined with morphological measurements such as volume and curvature, can provide significant guidance towards establishing progression or severity of a condition, and thus support improved diagnosis and therapy planning. Furthermore, the classification and stratification of organ abnormalities aim to explore and investigate organ deformations following injury, trauma and illness. This paper presents a framework for automatic morphological feature extraction in computer-aided 3D organ reconstructions following organ segmentation in 3D radiological scans. Two different magnetic resonance imaging (MRI) datasets are evaluated. Using the MRI scans of 85 adult volunteers, the overall mean pancreatic volume is 69.30 ± 32.50 cm³ and the 3D global curvature is (35.23 ± 6.83) × 10⁻³. A second experiment evaluates the MRI scans of 30 volunteers, yielding a mean liver volume of 1547.48 ± 204.19 cm³ and a 3D global curvature of (19.87 ± 3.62) × 10⁻³. Both experiments highlight a statistically significant negative correlation between 3D curvature and volume (p < 0.0001). Such a tool can support the investigation into organ-related conditions such as obesity, type 2 diabetes mellitus and liver disease.
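
    The volume measurements above follow directly from the segmentation masks and the scan's voxel spacing. The snippet below is a generic sketch of that calculation, assuming a binary 3D mask and spacing given in millimetres; it is not the framework's own implementation.

        import numpy as np

        def organ_volume_cm3(mask, voxel_spacing_mm):
            """Organ volume from a binary 3D mask and voxel spacing in millimetres."""
            voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
            n_voxels = int(np.count_nonzero(mask))
            return n_voxels * voxel_volume_mm3 / 1000.0  # 1 cm^3 = 1000 mm^3

        # Example: a mask of 70,000 voxels at 1.0 x 1.0 x 1.0 mm spacing
        # corresponds to a volume of 70.0 cm^3.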

    Cognitive behaviour analysis based on facial information using depth sensors

    Cognitive behaviour analysis is considered of high importance with many innovative applications in a range of sectors including healthcare, education, robotics and entertainment. In healthcare, cognitive and emotional behaviour analysis helps to improve the quality of life of patients and their families. Amongst all the different approaches for cognitive behaviour analysis, significant work has been focused on emotion analysis through facial expressions using depth and EEG data. Our work introduces an emotion recognition approach using facial expressions based on depth data and landmarks. A novel dataset was created that triggers emotions from long or short term memories. This work uses novel features based on a non-linear dimensionality reduction, t-SNE, applied on facial landmarks and depth data. Its performance was evaluated in a comparative study, proving that our approach outperforms other state-of-the-art features.
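
    The dimensionality reduction step can be illustrated with scikit-learn's t-SNE applied to stacked landmark and depth features. The feature matrix below is a random placeholder standing in for per-frame facial landmark coordinates and depth samples; the embedding dimensionality and perplexity are assumptions for illustration.

        import numpy as np
        from sklearn.manifold import TSNE

        # Placeholder feature matrix: one row per frame, columns are flattened
        # facial landmark coordinates concatenated with sampled depth values.
        features = np.random.rand(200, 180)

        # Non-linear dimensionality reduction with t-SNE; the low-dimensional
        # embedding can then feed an emotion classifier.
        embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
        print(embedding.shape)  # (200, 2)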

    A Modular Deep Learning Framework for Scene Understanding in Augmented Reality Applications

    Taking natural images and videos as input, augmented reality (AR) applications aim to enhance the real world with superimposed digital content, enabling interaction between the user and the environment. One important step in this process is automatic scene analysis and understanding, which should be performed in real time and with a good level of object recognition accuracy. In this work, an end-to-end framework combining a super resolution network with a deep detection and recognition network is proposed to increase performance and lower processing time. This novel approach has been evaluated on two different datasets: the popular COCO dataset, whose real images are used to benchmark many different computer vision tasks, and a generated dataset with synthetic images recreating a variety of environmental, lighting and acquisition conditions. The evaluation focuses on small objects, which are more challenging to detect and recognise correctly. The results show that the proposed end-to-end approach achieves higher Average Precision for small and low-resolution objects in most of the selected conditions.
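
    The core idea of chaining an upscaling step with a detector can be sketched as follows. Bicubic interpolation stands in for the super resolution network and torchvision's Faster R-CNN is used as an example detector; neither is the model evaluated in the paper.

        import torch
        import torch.nn.functional as F
        from torchvision.models.detection import fasterrcnn_resnet50_fpn

        # Stand-in for the super resolution network: plain bicubic upsampling.
        def upscale(image, scale=2):
            up = F.interpolate(image.unsqueeze(0), scale_factor=scale,
                               mode="bicubic", align_corners=False)
            return up.squeeze(0).clamp(0, 1)

        detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

        image = torch.rand(3, 240, 320)              # low-resolution input image
        with torch.no_grad():
            detections = detector([upscale(image)])  # detect on the upscaled image
        print(detections[0]["boxes"].shape)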

    Intraclass Clustering-Based CNN Approach for Detection of Malignant Melanoma

    This paper describes the process of developing a classification model for the effective detection of malignant melanoma, an aggressive type of cancer in skin lesions. The primary focus is on fine-tuning and improving a state-of-the-art convolutional neural network (CNN) to obtain the optimal ROC-AUC score. The study investigates a variety of artificial intelligence (AI) clustering techniques to train the developed models on a combined dataset of images from the 2019 and 2020 SIIM-ISIC Melanoma Classification Challenges. The models were evaluated using varying cross-fold validation schemes, with the highest ROC-AUC reaching 99.48%.
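
    Cross-fold ROC-AUC evaluation of a binary melanoma classifier can be sketched with scikit-learn as below. The synthetic features and the logistic regression are placeholders for the image data and the fine-tuned CNN, used only to show the stratified k-fold and scoring mechanics.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        # Synthetic, class-imbalanced stand-in for image-derived features.
        X, y = make_classification(n_samples=1000, n_features=50,
                                   weights=[0.9, 0.1], random_state=0)

        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                 cv=cv, scoring="roc_auc")
        print(f"ROC-AUC per fold: {scores}, mean: {scores.mean():.4f}")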

    An AI-Assisted Skincare Routine Recommendation System in XR

    In recent years, there has been an increasing interest in the use of artificial intelligence (AI) and extended reality (XR) in the beauty industry. In this paper, we present an AI-assisted skin care recommendation system integrated into an XR platform. The system uses a convolutional neural network (CNN) to analyse an individual's skin type and recommend personalised skin care products in an immersive and interactive manner. Our methodology involves collecting data from individuals through a questionnaire and conducting skin analysis using a provided facial image in an immersive environment. This data is then used to train the CNN model, which recognises the skin type and existing issues and allows the recommendation engine to suggest personalised skin care products. We evaluate our system in terms of the accuracy of the CNN model, which achieves an average score of 93% in correctly classifying existing skin issues. Integrated into an XR system, this approach has the potential to significantly enhance the beauty industry by providing immersive and engaging experiences to users, leading to more efficient and consistent skincare routines.
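
    A compact convolutional classifier for skin-type recognition could look like the PyTorch sketch below. The number of classes, input resolution and layer sizes are illustrative assumptions and do not describe the deployed model.

        import torch
        import torch.nn as nn

        class SkinTypeCNN(nn.Module):
            """Small convolutional classifier sketch for skin-type recognition."""
            def __init__(self, num_classes=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        model = SkinTypeCNN()
        logits = model(torch.rand(1, 3, 128, 128))  # one RGB facial image
        print(logits.shape)                         # torch.Size([1, 5])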

    A Framework for Morphological Feature Extraction of Organs from MR Images for Detection and Classification of Abnormalities

    In clinical practice, a misdiagnosis can lead to incorrect or delayed treatment, and in some cases, no treatment at all; consequently, the condition of a patient may worsen to varying degrees, in some cases proving fatal. The accurate 3D reconstruction of organs, which is a pioneering tool of medical image computing (MIC) technology, plays a key role in computer-aided diagnosis (CADx), thereby enabling medical professionals to perform enhanced analysis on a region of interest. From here, the shape and structure of the organ, coupled with measurements of its volume and curvature, can provide significant guidance towards establishing the severity of a disorder or abnormality, consequently supporting improved diagnosis and treatment planning. Moreover, the classification and stratification of organ abnormalities are widely utilised within biomedical, forensic and MIC research for exploring and investigating organ deformations following injury, illness or trauma. This paper presents a tool that calculates, classifies and analyses pancreatic volume and curvature following their 3D reconstruction. Magnetic resonance imaging (MRI) volumes of 115 adult patients are evaluated in order to examine the correlation between these two variables. Such a tool can be utilised within the scope of much broader research and investigation. It can also be incorporated into the development of effective medical image analysis software applications for the stratification of subjects and the targeting of therapies.
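
    The volume-curvature correlation examined here can be tested with a standard Pearson analysis. The sketch below uses randomly generated placeholder measurements purely to show the mechanics of the test; it does not reproduce the study's data.

        import numpy as np
        from scipy import stats

        # Placeholder per-subject measurements standing in for the computed
        # pancreatic volumes (cm^3) and 3D global curvature values.
        rng = np.random.default_rng(0)
        volumes = rng.normal(69.3, 32.5, size=115)
        curvatures = 0.05 - 0.0002 * volumes + rng.normal(0.0, 0.003, size=115)

        r, p = stats.pearsonr(volumes, curvatures)
        print(f"Pearson r = {r:.3f}, p-value = {p:.2e}")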

    3D CATBraTS: Channel Attention Transformer for Brain Tumour Semantic Segmentation

    Brain tumour diagnosis is a challenging yet crucial task for planning treatments to stop or slow the growth of a tumour. In the last decade, there has been a dramatic increase in the use of convolutional neural networks (CNNs) for their high performance in the automatic segmentation of tumours in medical images. More recently, the Vision Transformer (ViT) has become a central focus in medical imaging for its robustness and efficiency when compared to CNNs. In this paper, we propose a novel 3D transformer named 3D CATBraTS for brain tumour semantic segmentation on magnetic resonance images (MRIs), based on the state-of-the-art Swin transformer with a modified CNN-encoder architecture using residual blocks and a channel attention module. The proposed approach is evaluated on the BraTS 2021 dataset and achieves a mean Dice similarity coefficient (DSC) that surpasses current state-of-the-art approaches in the validation phase.
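
    A channel attention block of the squeeze-and-excitation style, adapted to 3D feature maps, is sketched below to illustrate the kind of module combined with residual blocks in the encoder. The reduction ratio and tensor sizes are assumptions; this is not the 3D CATBraTS implementation.

        import torch
        import torch.nn as nn

        class ChannelAttention3D(nn.Module):
            """Squeeze-and-excitation style channel attention for 3D feature maps."""
            def __init__(self, channels, reduction=8):
                super().__init__()
                self.pool = nn.AdaptiveAvgPool3d(1)
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction), nn.ReLU(),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid(),
                )

            def forward(self, x):
                b, c = x.shape[:2]
                weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
                return x * weights  # re-weight the channels of the feature map

        features = torch.rand(1, 32, 16, 64, 64)  # (batch, channels, D, H, W)
        print(ChannelAttention3D(32)(features).shape)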